Experimental Research

PSCI 2270 - Week 10

Alec Tripp and Georgiy Syunyaev

Department of Political Science, Vanderbilt University

October 31, 2024

Plan for this week



  1. Censorship in China

  2. Attitudes towards immigrants

  3. Campaign contributions and access

Summarizing experimental design



  • What is the research hypothesis?

  • What are the experimental conditions?

  • Who (or what) are the subjects?

  • How are the subjects assigned to treatment(s)?

  • In what context does the experiment take place?

  • How are outcomes measured?

  • How do researchers estimate the average treatment effect?

  • Are there any threats to inference? Think about random assignment, non-interference, excludability, experimenter effects, treatment meaning

Censorship in China

Censorship in China

  • “Reverse-Engineering Censorship in China: Randomized Experimentation and Participant Observation.” by King, Pan, and Roberts (2014)
  • Summary:

    • Experimental study of what is censored on social media platforms in China
    • Two competing theories: censorship targets anti-government statements vs. collective action statements
    • Conduct participant observation to provide qualitative evidence on how censorship is structured
    • Created and posted many social media posts, varying whether they contain collective action topics and whether they take a pro- or anti-government stance
    • Find strong evidence in favor of censorship of collective action statements

What is the research hypothesis?


  • Why do autocrats censor?

    • To stay in power and prevent the public from overthrowing them
  • How can this be achieved through censorship?

    1. By projecting a positive image of the leader \(\Rightarrow\) censor anti-government discussions
    2. By making people believe that no one is dissatisfied with the government \(\Rightarrow\) censor collective action
  • Hypothesis: The authors argue that the collective action theory is correct, not the positive-image theory.

What is the key motivation?


  • A previous study used observed posts on Chinese social media

    • It found that collective action is censored but not government criticism
  • Why might there be issues with that study?

    • Possible confounders
    • Possible selection bias

Experimental conditions? Subjects?


  • What is the treatment in the study? Two separate treatments

    1. Social media posts with collective action vs those without collective action potential
    2. Social media posts that take pro-government vs anti-government stance
  • Who (or what) are the subjects?

    • Not humans (!) but rather social media posts
  • Design: Factorial audit experiment

    • Factorial design: We have two separate treatments and create experimental groups for all possible (\(2^2\)) combinations of them: \(00\) (control), \(01\), \(10\), \(11\)
    • Audit experiment: Measure responsiveness and discrimination in bureaucracy, government, or other organizations. Treatments are actions that should trigger a response (usually from the government).

Treatment assignment and context


  • How did they assign the treatment?

    • Scraped the topics during the study and wrote the posts themselves
    • Randomly assigned posts during the study
  • What is the context of the study?

    • Selected 100 Chinese social media platforms and created users on them
    • 12 posts per platform, for a total sample size of 1,200
  • Possible issues?

    • Excludability: How do they ensure posts do not differ on dimensions other than pro-/anti-government and collective/non-collective action information?
    • Treatment meaning: Are the topics they select actually representative of collective action posts?
    • Random assignment: Is the random procedure clear?

Outcomes explained

Outcomes? Analysis?


  • What are the outcomes they measure?

    1. Post selected for automated review
    2. Post censored (either after review or after publishing)
  • How do they analyze the data?

    • Look at differences in means for each treatment (collective vs. non-collective action; pro- vs. anti-government) separately
    • This is the beauty of factorial designs!
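The factorial logic above can be sketched with simulated data (this is a toy illustration with made-up numbers and variable names, not the authors' data or code): because the two treatments are randomized independently, each one's average effect is a simple difference in means.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1200  # total posts, matching the study's sample size

# Two independently randomized binary treatments (simulated)
collective = rng.integers(0, 2, n)  # 1 = collective-action content
anti_gov = rng.integers(0, 2, n)    # 1 = anti-government stance

# Simulated censorship outcome: here, by assumption, only
# collective-action content raises the censorship probability
p_censor = 0.15 + 0.25 * collective
censored = rng.binomial(1, p_censor)

def diff_in_means(y, d):
    """Average outcome under treatment minus under control."""
    return y[d == 1].mean() - y[d == 0].mean()

# Factorial design: estimate each treatment's effect separately,
# marginalizing over the other factor
ate_collective = diff_in_means(censored, collective)
ate_anti_gov = diff_in_means(censored, anti_gov)
```

Under these simulated parameters, `ate_collective` recovers a large positive effect while `ate_anti_gov` is close to zero, mirroring the paper's qualitative finding.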

Main results

Let’s discuss concerns



  • Should we be concerned about non-interference? Likely No!

  • Should we be concerned about experimenter effects? Yes!

  • Should we be concerned about excludability? Likely No!

  • Should we be concerned that topics were picked from the pool of observed posts? Not for experimental part!

  • Any other concerns?

    • Should we be concerned about writers of the experimental posts?
  • Do you think audit experiments are useful? Where else can we use them?

Attitudes towards immigrants

Attitudes towards immigrants

  • “The Hidden American Immigration Consensus: A Conjoint Analysis of Attitudes Toward Immigrants.” by Hainmueller and Hopkins (2015)
  • Summary:

    • Experimental study of Americans' preferences over types of immigrants
    • Theories: Partisan, economic, and sociotropic factors could affect preferences over immigrants
    • Conduct a study where respondents compare two immigrant profiles at a time
    • Find overall support for high-skilled, educated immigrants who plan to work, but no differences across partisan attachments

What is the research hypothesis?


  • There is a perception that Democrats and Republicans vary in terms of their preferences on immigration policy

    • Democrats are expected to be more accepting of immigrants regardless of their profiles
  • Alternative theories

    • Economic: Job threat as a factor that affects preferences on immigration
    • Sociotropic: Preferences for immigrants who are more likely to contribute to the economy
    • Norms-based: Preferences for immigrants who are more likely to assimilate
    • Prejudice: Preferences for white/European immigrants

Experimental conditions? Subjects?


  • What is the treatment in the study? Many separate treatments

    • Prior trips; Reasons for application; Country of origin; Language skills; Profession; Job experience; Employment plans; Education level; Gender… 🤯
  • Who (or what) are the subjects?

    • Americans in nationally representative panel study
  • Design: Conjoint experiment

    • \(\approx\) factorial design on steroids!
    • Borrowed from marketing research, where it is used to test different product features; allows testing a large list of features
    • Ask people to rate or compare different profiles (not necessarily of humans, e.g., policies/laws/events)

Treatment assignment and context


  • How did they assign the treatment?

    • Showed two profiles with randomly assigned features (every feature is randomized separately)
    • There are \(\approx\) 900,000 possible profiles, so not all profiles are used…
    • Is that a problem? No: in expectation, profiles are balanced on every other feature, so each feature's effect is still identified
  • What is the context of the study?

    • Representative panel (i.e., each person is interviewed more than once) survey in the US
    • 1,714 completed first wave, 1,407 completed second wave
  • Possible issues?

    • Random assignment: Is the random procedure clear?
    • Treatment meaning: Listing a feature in the table has a priming component. Are we sure all those factors are relevant?

Outcomes? Analysis?



  • What are the outcomes they measure?

    1. Comparison between two profiles
    2. Rating of profiles on absolute scale
  • How do they analyze the data?

    • Look at differences in means for each feature (marginalized over other dimensions)
    • This is the beauty of factorial/conjoint designs!
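A minimal sketch of this analysis with simulated conjoint data (a toy subset of features with made-up effect sizes, not the authors' data): since every feature is randomized independently, the marginal effect of each feature is again a difference in means, and interactions with respondent traits can be checked by comparing effects across subgroups.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # hypothetical number of rated profiles

# Independently randomized profile features (toy subset)
high_skill = rng.integers(0, 2, n)      # 1 = high-skilled profession
plans_to_work = rng.integers(0, 2, n)   # 1 = has employment plans
respondent_dem = rng.integers(0, 2, n)  # 1 = Democratic respondent

# Simulated support: skill and work plans matter, party does not
p_support = 0.35 + 0.15 * high_skill + 0.10 * plans_to_work
support = rng.binomial(1, p_support)

def marginal_effect(y, d):
    """Difference in mean support, marginalizing over other features."""
    return y[d == 1].mean() - y[d == 0].mean()

amce_skill = marginal_effect(support, high_skill)
amce_work = marginal_effect(support, plans_to_work)

# Does the skill effect differ by respondent party? (it should not here)
dem = respondent_dem == 1
gap = (marginal_effect(support[dem], high_skill[dem])
       - marginal_effect(support[~dem], high_skill[~dem]))
```

In this simulation, `amce_skill` and `amce_work` recover their assumed effects, while `gap` is near zero, echoing the paper's "hidden consensus" across partisan lines.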

Outcomes explained

Main results

More results…

Let’s discuss concerns



  • Should we be concerned about excludability? Likely No!

  • Should we be concerned about experimenter effects? Yes!

  • Should we be concerned about non-interference? Yes!

  • Should we be concerned about many tests that they run? Yes!

  • Any other concerns?

    • Too many profiles could be a problem if some feature combinations are too sparse \(\Rightarrow\) ask each person to rate more than one pair of profiles
  • Do you think conjoint experiments are useful? Where else can we use them?

Campaign contributions and access

Campaign contributions and access

  • “Campaign Contributions Facilitate Access to Congressional Officials: A Randomized Field Experiment.” by Kalla and Broockman (2016)
  • Summary:

    • Investigates whether campaign contributions increase access to congressional officials
    • Conducted a field experiment with real political donors and congressional offices
    • Found that donors are more likely to receive meetings with senior policymakers

What is the research hypothesis?


  • Hypothesis: Campaign contributions increase the likelihood of gaining access to congressional officials.

  • Why is this important?

    • Understanding the influence of money in politics
    • Implications for democratic representation and policy-making

Experimental conditions? Subjects?


  • What is the treatment in the study? Disclosure of donor status

    • Disclosure of donor status in meeting request over email
  • Who (or what) are the subjects?

    • U.S. Representatives (or their office members)
  • Design: Randomized field experiment

    • Randomly assigned meeting requests to either donor or non-donor status
    • Measured the response rate and level of access granted

Treatment assignment and context


  • How did they assign the treatment?

    • Block random assignment to one of two conditions
    • Split the sample into blocks of similar units, then randomize within each block
  • What is the context of the study?

    • CREDO Action initiative to build support for a ban on a chemical
    • 191 U.S. Representative offices that did not cosponsor the bill
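Block random assignment can be sketched as follows (a simulated illustration with a hypothetical blocking variable; the authors' actual blocks are not specified here):

```python
import numpy as np

rng = np.random.default_rng(2)

# 191 offices, as in the study; toy block labels standing in for
# whatever similarity criteria the researchers used
n = 191
blocks = rng.integers(0, 10, n)

# Within each block, assign half (rounded down) to the donor condition
treat = np.zeros(n, dtype=int)
for b in np.unique(blocks):
    idx = np.flatnonzero(blocks == b)
    chosen = rng.choice(idx, size=len(idx) // 2, replace=False)
    treat[chosen] = 1
```

The payoff of blocking is balance by construction: treated and control counts are nearly equal within every block, which reduces chance imbalance on the blocking variables.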

Outcomes? Analysis?


  • What are the outcomes they measure?

    1. Likelihood of receiving a meeting
    2. Level of access (e.g., meeting with senior staff or the official)
  • How do they analyze the data?

    • Compare the access rates between donor and non-donor conditions
    • Look at outcomes by type of member of the office and cumulative results
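With block random assignment, the comparison of access rates can be done as a block-share-weighted average of within-block differences in means. A self-contained sketch with simulated data (toy block labels and made-up effect sizes, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the 191 offices
n = 191
blocks = rng.integers(0, 5, n)  # hypothetical block labels

# Block random assignment: half of each block to the donor condition
donor = np.zeros(n, dtype=int)
for b in np.unique(blocks):
    idx = np.flatnonzero(blocks == b)
    donor[rng.choice(idx, size=len(idx) // 2, replace=False)] = 1

# Simulated outcome: meeting granted, more likely in donor condition
meeting = rng.binomial(1, 0.05 + 0.10 * donor)

# ATE = weighted average of within-block differences in means,
# with weights equal to each block's share of the sample
ate = sum(
    (blocks == b).mean()
    * (meeting[(blocks == b) & (donor == 1)].mean()
       - meeting[(blocks == b) & (donor == 0)].mean())
    for b in np.unique(blocks)
)
```

Weighting by block share makes the estimate an average over the whole sample rather than a simple average of block-level effects, which matters when blocks differ in size.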

Main results

Let’s discuss concerns



  • Should we be concerned about excludability? Likely No!

  • Should we be concerned about external validity? Yes!

  • Should we be concerned about non-interference? Yes!

  • Any other concerns?

  • How might these concerns affect the interpretation of the study’s results?

References

Hainmueller, Jens, and Daniel J. Hopkins. 2015. “The Hidden American Immigration Consensus: A Conjoint Analysis of Attitudes Toward Immigrants.” American Journal of Political Science 59 (3): 529–48. https://doi.org/10.1111/ajps.12138.
Kalla, Joshua L., and David E. Broockman. 2016. “Campaign Contributions Facilitate Access to Congressional Officials: A Randomized Field Experiment.” American Journal of Political Science 60 (3): 545–58.
King, Gary, Jennifer Pan, and Margaret E. Roberts. 2014. “Reverse-Engineering Censorship in China: Randomized Experimentation and Participant Observation.” Science 345 (6199): 1251722. https://doi.org/10.1126/science.1251722.